Search results

1 – 10 of 12

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 29 June 2010

Elhadi Shakshuki and Abdur Rafey Matin


Abstract

Purpose

Intelligent agents are becoming an essential part of collaborative virtual environments. The purpose of this paper is to present an architecture of a learning agent that is able to utilize machine learning techniques to monitor the user's actions.

Design/methodology/approach

A learning agent is developed and integrated into a federated collaborative virtual workspace.

Findings

The experimental results showed that combining genetic algorithms with reinforcement learning algorithms gives the agent better learning capability, resulting in better predictions for the user.
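The combination the abstract describes can be illustrated with a minimal, hypothetical sketch: a genetic algorithm that evolves the hyperparameters of a one-state Q-learner on a toy user-prediction task. The task, the genome layout (learning rate and exploration rate), and all parameter values are invented for illustration; the paper's actual agent is far richer.

```python
import random

random.seed(0)

# Toy prediction task: the "user" rewards predicting action 0, so a good
# agent should learn to predict it. This stands in for the paper's
# user-action monitoring and is an assumption, not the authors' setup.
ACTIONS = [0, 1]

def run_episode(q, alpha, epsilon, steps=50):
    """Q-learning on a one-state task; reward 1 for predicting action 0."""
    total = 0
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)             # explore
        else:
            a = max(ACTIONS, key=lambda x: q[x])   # exploit
        r = 1 if a == 0 else 0
        q[a] += alpha * (r - q[a])                 # one-step Q update
        total += r
    return total

def fitness(genome):
    """Score an (alpha, epsilon) genome by the reward its learner earns."""
    alpha, epsilon = genome
    return run_episode([0.0, 0.0], alpha, epsilon)

def evolve(pop_size=8, generations=10):
    """Genetic algorithm over the Q-learner's hyperparameters."""
    pop = [(random.uniform(0.01, 1.0), random.uniform(0.0, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # refill the population by mutating the survivors
        pop = survivors + [
            (min(1.0, max(0.01, a + random.gauss(0, 0.1))),
             min(0.5, max(0.0, e + random.gauss(0, 0.05))))
            for a, e in survivors]
    return max(pop, key=fitness)

best_alpha, best_epsilon = evolve()
```

On this toy task a greedy learner (epsilon near zero) collects the maximum reward, so the GA's role here is simply to find hyperparameters that let the Q-learner exploit quickly.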

Originality/value

This paper provides experimental results and a performance analysis in terms of accuracy of predictions, processing time, and memory utilization of the agent.

Details

International Journal of Pervasive Computing and Communications, vol. 6 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 20 December 2007

Jamal Bentahar, Francesca Toni, John‐Jules Ch. Meyer and Jihad Labban


Abstract

Purpose

This paper aims to address some security issues in open systems such as service-oriented applications and grid computing. It proposes a security framework for these systems from a trust viewpoint. The objective is to equip the entities in these systems with mechanisms that allow them to decide whether or not to trust each other before starting transactions.

Design/methodology/approach

In this paper, the entities of open systems (web services, virtual organizations, etc.) are designed as software autonomous agents equipped with advanced communication and reasoning capabilities. Agents interact with one another by communicating using public dialogue game‐based protocols and strategies on how to use these protocols. These strategies are private to individual agents, and are defined in terms of dialogue games with conditions. Agents use their reasoning capabilities to evaluate these conditions and deploy their strategies. Agents compute the trust they have in other agents, represented as a subjective quantitative value, using direct and indirect interaction histories with these other agents and the notion of social networks.
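As a rough illustration of this kind of trust computation, the sketch below blends a direct interaction history with recommendations from a social network, weighting the direct part by interaction count and by time decay. The class names, the confidence formula, and the half-life decay are all assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class History:
    """Direct interaction history one agent keeps about another."""
    successes: int = 0
    failures: int = 0
    last_time: float = 0.0   # timestamp of the most recent interaction

    def rating(self):
        n = self.successes + self.failures
        return self.successes / n if n else 0.5   # neutral prior

    def confidence(self):
        # more interactions -> more weight on the direct rating
        n = self.successes + self.failures
        return n / (n + 10)

def time_decay(now, last_time, half_life=100.0):
    """Timely relevance: stale experience counts for less."""
    return 0.5 ** ((now - last_time) / half_life)

def trust(direct: History, recommendations, now=0.0):
    """Blend direct experience with the mean recommendation from the
    social network, weighting the direct part by interaction count and
    timely relevance."""
    w = direct.confidence() * time_decay(now, direct.last_time)
    indirect = (sum(recommendations) / len(recommendations)
                if recommendations else 0.5)
    return w * direct.rating() + (1 - w) * indirect
```

For example, `trust(History(successes=9, failures=1), [0.4, 0.6])` blends a direct rating of 0.9 with a mean recommendation of 0.5 at equal weight, giving 0.7; an agent with no history falls back entirely on its social network.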

Findings

The paper finds that trust is subject to many parameters, such as the number of interactions between agents, the size of the social network, and the timely relevance of information. Combining these parameters provides a comprehensive trust model. The proposed framework is proven to be computationally efficient, and simulations show that it can be used to detect malicious entities.

Originality/value

The paper proposes different protocols and strategies for trust computation and different parameters to consider when computing this trust. It proposes an efficient algorithm for this computation and a prototype simulating it.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 28 September 2007

Elhadi Shakshuki, Andreas Kerren and Tomasz Müldner


Abstract

Purpose

The purpose of this paper is to present the development of a system called Structured Hypermedia Algorithm Explanation (SHALEX), as a remedy for the limitations existing within the current traditional algorithm animation (AA) systems. SHALEX provides several novel features, such as use of invariants, reflection of the high‐level structure of an algorithm rather than low‐level steps, and support for programming the algorithm in any procedural or object‐oriented programming language.

Design/methodology/approach

By defining the structure of an algorithm as a directed graph of abstractions, algorithms may be studied top‐down, bottom‐up, or using a mix of the two. In addition, SHALEX includes a learner model to provide spatial links, and to support evaluations and adaptations.
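The "directed graph of abstractions" idea can be sketched in a few lines: each node is one abstraction level, edges point from an abstraction to its refinements, and the top-down and bottom-up study orders are simply two traversals of the graph. The quicksort graph below is an invented example, not taken from SHALEX.

```python
# Each key is an abstraction; its value lists the refinements it expands into.
graph = {
    "quicksort": ["partition", "recurse"],
    "partition": ["choose-pivot", "swap"],
    "recurse": [],
    "choose-pivot": [],
    "swap": [],
}

def top_down(node, g):
    """Study an abstraction before its refinements."""
    yield node
    for child in g[node]:
        yield from top_down(child, g)

def bottom_up(node, g):
    """Study the refinements before the abstraction they belong to."""
    for child in g[node]:
        yield from bottom_up(child, g)
    yield node
```

A mixed strategy would interleave the two, e.g. studying one subtree bottom-up inside an otherwise top-down pass.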

Findings

Evaluations of traditional AA systems designed to teach algorithms in higher education or in professional training show that such systems have not achieved many expectations of their developers. One reason for this failure is the lack of stimulating learning environments which support the learning process by providing features such as multiple levels of abstraction, support for hypermedia, and learner‐adapted visualizations. SHALEX supports these environments, and in addition provides persistent storage that can be used to analyze students' performance. In particular, this storage can be used to represent a student model that supports adaptive system behavior.

Research limitations/implications

SHALEX is being implemented and tested by the authors and a group of students. The tests performed so far have shown that SHALEX is a very useful tool. In the future additional quantitative evaluation is planned to compare SHALEX with other AA systems and/or the concept keyboard approach.

Practical implications

SHALEX has been implemented as a web‐based application using the client‐server architecture. Therefore students can use SHALEX to learn algorithms both through distance education and in the classroom setting.

Originality/value

This paper presents a novel algorithm explanation system for users who wish to learn algorithms.

Details

International Journal of Web Information Systems, vol. 3 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 20 December 2007

Carlos Eduardo de Barros Paes and Celso Massaki Hirata


Abstract

Purpose

Nowadays, most software development processes still do not provide appropriate support for the development of secure systems. The Rational Unified Process (RUP) is a well-known software engineering process that provides a disciplined approach to assigning tasks and responsibilities; however, it offers little support for the development of secure systems. This work aims to present an extension of RUP for the development of secure systems.

Design/methodology/approach

In order to obtain the proposed extension, the authors treat security as a knowledge area (discipline) and define its workflow, activities, and roles according to the Unified Method Architecture (UMA) process-engineering framework. A software development project was used to assess the extended RUP qualitatively.

Findings

Based on the development, the authors find that the proposed process produces security requirements in a more systematic way and results in the definition of better system architecture.

Research limitations/implications

The proposed extension requires specific adaptation if other development processes, such as agile or waterfall, are employed.

Practical implications

The extension facilitates the management, execution, and control of security-related activities and tasks, and development teams can benefit by constructing better-quality software.

Originality/value

The originality of the paper is the proposal of extension to RUP in order to consider security in a disciplined and organized way.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 20 December 2007

Darcy Benoit and André Trudel


Abstract

Purpose

To measure the exact size of the world wide web (i.e. a census). The measure used is the number of publicly accessible web servers on port 80.

Design/methodology/approach

Every IP address on the internet is queried for the presence of a web server.

Findings

The census found 18,560,257 web servers.

Research limitations/implications

Any web server hidden behind a firewall, or that did not respond within a reasonable amount of time (20 seconds), was not counted by the census.
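The census's pass/fail rule, a TCP connection attempt to port 80 in which slow or firewalled hosts count as "no server", can be sketched with the standard library; a real census would additionally speak HTTP to fetch the homepage. The function name is an invention for illustration.

```python
import socket

def has_web_server(host, port=80, timeout=20.0):
    """Return True if a TCP connection to host:port succeeds within the
    timeout. Hosts that refuse, are unreachable, or respond too slowly
    are treated as having no server, mirroring the census rule."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Scanning every IPv4 address this way is what makes the result a census rather than a sample.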

Practical implications

Whenever a server was found, a copy of its homepage was downloaded and stored. The resulting database of homepages is a historical snapshot of the web that will be mined for information in the future.

Originality/value

Past web surveys performed by various research groups were only estimates of the size of the web. This is the first time its size has been exactly measured.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 20 December 2007

Abdeslam En‐Nouaary


Abstract

Purpose

This paper aims to address formal testing of real‐time systems by providing readers with guidance for generating test cases from timed automata.

Design/methodology/approach

In this paper, a set of test selection criteria is presented. Such criteria are useful for testing real-time systems specified by timed automata. The criteria are introduced after a presentation of the timed automata model and its related concepts.

Findings

The paper finds that the test selection criteria are ordered by the inclusion relation. This ordering is useful for developing new testing methods and for comparing existing approaches.

Originality/value

Each of the proposed test selection criteria can be used to develop a new method for testing timed automata with certain fault coverage.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 20 December 2007

Bernard J. Jansen, Mimi Zhang and Amanda Spink


Abstract

Purpose

To investigate and identify the patterns of interaction between searchers and search engine during web searching.

Design/methodology/approach

The authors examined 2,465,145 interactions from 534,507 users of Dogpile.com submitted on May 6, 2005, and compared query reformulation patterns. They investigated the type of query modifications and query modification transitions within sessions.
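One common way to classify a single query-to-query transition within a session is by term overlap between consecutive queries; the rule set below is a simplified stand-in for the authors' scheme, and the category names are assumptions.

```python
def classify_transition(prev_query: str, next_query: str) -> str:
    """Label how next_query modifies prev_query, using set relations
    between their terms."""
    prev_terms = set(prev_query.split())
    next_terms = set(next_query.split())
    if next_terms == prev_terms:
        return "identical"
    if next_terms > prev_terms:
        return "specialization"   # terms added: narrower query
    if next_terms < prev_terms:
        return "generalization"   # terms removed: broader query
    if next_terms & prev_terms:
        return "reformulation"    # overlap, but neither contains the other
    return "new topic"            # no shared terms
```

For example, `classify_transition("jaguar", "jaguar car")` returns `"specialization"`, while `classify_transition("jaguar car", "jaguar cat")` returns `"reformulation"`.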

Findings

The paper identifies three strong query reformulation transition patterns: between specialization and generalization; between video and audio; and between content change and system assistance. In addition, the findings show that the web and images content collections were the most popular.

Originality/value

This research sheds light on the more complex aspects of web searching involving query modifications.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 20 December 2007

Mohammad Eyadat and Dorothy Fisher


Abstract

Purpose

The purpose of this research is to examine web accessibility initiative (WAI) guidelines for web accessibility so as to incorporate web accessibility in information systems (IS) curriculum.

Design/methodology/approach

The authors used the WebXact software accessibility evaluation tool to test the top pages of the web sites of the 23 California State University (CSU) campuses in order to identify their level of compliance with federal standards. The authors also designed and conducted a questionnaire to survey the students enrolled in the first web development course at CSU, Dominguez Hills to assess their knowledge and skills in various web accessibility topics.
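A single WAI-style check of the kind such tools automate, flagging `img` elements with no alt text (WCAG 1.0 checkpoint 1.1, "provide a text equivalent for every non-text element"), can be sketched with the standard library. WebXact itself performed many more checks; this is only an illustration.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Count <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

def count_missing_alt(html: str) -> int:
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations
```

Running such checks over a campus homepage yields the kind of pass/fail evidence the study aggregated across the 23 CSU sites.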

Findings

The research findings show that the majority of the CSU campuses' top web pages failed to meet WAI guidelines at some point. Moreover, two-thirds of the students who responded to the survey had no knowledge of the web accessibility topics included in the questionnaire. The results indicate that IS programs have failed to incorporate accessibility into their curricula and to produce web developers with skills and knowledge in web accessibility.

Research limitations/implications

The limitation of this research is that the sample size is small. The authors intend to increase the number of university web sites tested and to survey all students in the IS program in a future study.

Practical implications

This research is background work that will help the authors to incorporate accessibility topics in their web development courses that include web accessibility basic concepts, universal design, Section 508 of the US Rehabilitation Act, web content accessibility guidelines, WAI guidelines for web accessibility, and web accessibility testing tools.

Originality/value

This research improves the current state of web accessibility in higher-education curricula.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 20 December 2007

Isak Taksa, Sarah Zelikovitz and Amanda Spink


Abstract

Purpose

The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users.

Design/methodology/approach

The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that most fit the personal properties of the users. These terms are then used to begin the next iteration.
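The iterative loop described above can be sketched compactly: label queries containing known terms, harvest new terms from the labeled queries, and repeat. Ranking candidate terms by simple frequency, rather than against background knowledge acquired from web search results, is a simplification assumed here, and the example data is invented.

```python
from collections import Counter

def bootstrap_terms(queries, seed_terms, iterations=2, top_k=2):
    """Grow a set of property-related terms from a small manual seed set."""
    known = set(seed_terms)
    for _ in range(iterations):
        # label every query that contains a known term
        labeled = [q for q in queries if known & set(q.split())]
        # rank co-occurring terms by frequency in the labeled queries
        counts = Counter(t for q in labeled for t in q.split()
                         if t not in known)
        known |= {t for t, _ in counts.most_common(top_k)}
    return known

def classify(queries, known):
    """Return the queries labeled with the property."""
    return [q for q in queries if known & set(q.split())]
```

Each iteration can reach queries that share no terms with the original seed set, which is what lets a small manual effort label a large log.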

Findings

The authors identify the difficulties of classifying web logs by approaching the problem from a machine learning perspective. By applying the approach developed, the authors show that many queries in a large query log can be classified.

Research limitations/implications

Validating results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluating the classification results by comparing classified queries to well-known age-related sites is a direction currently being explored.

Practical implications

This research is background work that can be incorporated in search engines or other web‐based applications, to help marketing companies and advertisers.

Originality/value

This research enhances the current state of knowledge in short‐text classification and query log learning.

Details

International Journal of Web Information Systems, vol. 3 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

1 – 10 of 12